
    Retinal Artery/Vein Classification via Graph Cut Optimization

    In many diseases with a cardiovascular component, the geometry of microvascular blood vessels changes. These changes are specific to arteries and veins, and can be studied in the microvasculature of the retina using retinal photography. To facilitate large-scale studies of artery/vein-specific changes in the retinal vasculature, automated classification of the vessels is required. Here we present a novel method for artery/vein classification based on local and contextual feature analysis of retinal vessels. For each vessel, local information in the form of a transverse intensity profile is extracted. Crossings and bifurcations of vessels provide contextual information. The local and contextual features are integrated into a non-submodular energy function, which is optimized exactly using graph cuts. The method was validated on a ground truth data set of 150 retinal fundus images, achieving an accuracy of 88.0% for all vessels and 94.0% for the six arteries and six veins with the highest caliber in the image.
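The abstract describes minimizing an energy with per-vessel (local) terms and pairwise (contextual) terms over binary artery/vein labels. The sketch below is only illustrative: the features, weights, and the brute-force minimizer are stand-ins, not the paper's actual energy or its graph-cut solver, and exhaustive search is feasible only for a tiny vessel graph.

```python
# Illustrative sketch (assumption: toy unary/pairwise costs, and exhaustive
# search in place of the paper's graph-cut optimization).
import itertools

def classify_vessels(unary, pairwise):
    """Exactly minimize E(x) = sum_i unary[i][x_i] + sum_{(i,j)} pairwise[(i,j)][x_i][x_j]
    over binary labels x_i in {0: artery, 1: vein}."""
    n = len(unary)
    best, best_cost = None, float("inf")
    for labels in itertools.product((0, 1), repeat=n):
        cost = sum(unary[i][labels[i]] for i in range(n))
        cost += sum(pairwise[(i, j)][labels[i]][labels[j]] for (i, j) in pairwise)
        if cost < best_cost:
            best, best_cost = labels, cost
    return best, best_cost

# Three vessel segments: local intensity profiles favor (artery, vein, artery);
# a crossing between segments 0 and 1 makes opposite labels cheaper.
unary = [[0.2, 0.8], [0.9, 0.1], [0.3, 0.7]]
pairwise = {(0, 1): [[0.5, 0.0], [0.0, 0.5]]}
labels, cost = classify_vessels(unary, pairwise)
print(labels)  # → (0, 1, 0)
```

The contextual term here encodes the prior that two vessels crossing each other are usually one artery and one vein.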

    Pulmonary CT registration through supervised learning with convolutional neural networks

    Deformable image registration can be time-consuming and often needs extensive parameterization to perform well on a specific application. We present a deformable registration method based on a 3-D convolutional neural network, together with a framework for training such a network. The network directly learns transformations between pairs of 3-D images. The network is trained on synthetic random transformations which are applied to a small set of representative images for the desired application. Training, therefore, does not require manually annotated ground truth information on the deformation. The framework for the generation of transformations for training uses a sequence of multiple transformations at different scales that are applied to the image. This way, complex transformations with large displacements can be modeled without folding or tearing images. The methodology is demonstrated on public data sets of inhale-exhale lung CT image pairs, which come with landmarks for evaluation of the registration quality. We show that a small training set can be used to train the network, while still allowing generalization to a separate pulmonary CT data set containing data from a different patient group, acquired using a different scanner and scan protocol. This approach results in an accurate and very fast deformable registration method, without a requirement for parameterization at test time or manually annotated data for training.
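The multi-scale synthetic-transformation idea can be sketched in 2-D: draw random displacements on a coarse grid, upsample them to image resolution, and warp, composing a coarse large-displacement field with a fine small one. The grid sizes, smoothing, and interpolation below are assumptions for illustration, not the paper's generator (which works in 3-D).

```python
# Hedged sketch (assumption: 2-D, bilinear upsampling and nearest-neighbor
# warping stand in for the paper's 3-D transformation generator).
import numpy as np

def random_displacement(shape, grid, amplitude, rng):
    """Coarse random displacements, bilinearly upsampled to image resolution."""
    coarse = rng.uniform(-amplitude, amplitude, size=(2, grid, grid))
    ys = np.linspace(0, grid - 1, shape[0])
    xs = np.linspace(0, grid - 1, shape[1])
    def upsample(c):
        y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
        y1 = np.minimum(y0 + 1, grid - 1); x1 = np.minimum(x0 + 1, grid - 1)
        wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
        return ((1 - wy) * (1 - wx) * c[np.ix_(y0, x0)]
                + (1 - wy) * wx * c[np.ix_(y0, x1)]
                + wy * (1 - wx) * c[np.ix_(y1, x0)]
                + wy * wx * c[np.ix_(y1, x1)])
    return np.stack([upsample(coarse[0]), upsample(coarse[1])])

def warp(img, disp):
    """Nearest-neighbor backward warping with the given displacement field."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    sy = np.clip(np.rint(yy + disp[0]).astype(int), 0, h - 1)
    sx = np.clip(np.rint(xx + disp[1]).astype(int), 0, w - 1)
    return img[sy, sx]

rng = np.random.default_rng(0)
img = rng.random((64, 64))
# coarse, large-displacement field followed by a fine, small one
warped = warp(img, random_displacement(img.shape, grid=4, amplitude=6.0, rng=rng))
warped = warp(warped, random_displacement(img.shape, grid=16, amplitude=1.5, rng=rng))
print(warped.shape)  # → (64, 64)
```

Warping twice in sequence is a simplification; composing the fields first and resampling once avoids repeated interpolation.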

    Error estimation of deformable image registration of pulmonary CT scans using convolutional neural networks

    Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.
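Assembling a per-voxel error map from a patch-wise estimator can be sketched as follows. Here `estimate_error` is a deliberately trivial stand-in (mean absolute intensity difference) for the trained 3-D CNN regressor; only the sliding-patch assembly is the point.

```python
# Sketch (assumption: `estimate_error` replaces the paper's CNN; it is a
# toy patch-dissimilarity measure used only to show error-map construction).
import numpy as np

def estimate_error(patch_fixed, patch_moving):
    # stand-in for the CNN regressor
    return float(np.mean(np.abs(patch_fixed - patch_moving)))

def error_map(fixed, moving, radius=1):
    """Apply the patch-wise estimator at every voxel of a padded volume."""
    pad = [(radius, radius)] * fixed.ndim
    f = np.pad(fixed, pad, mode="edge")
    m = np.pad(moving, pad, mode="edge")
    out = np.zeros(fixed.shape, dtype=float)
    for idx in np.ndindex(fixed.shape):
        sl = tuple(slice(i, i + 2 * radius + 1) for i in idx)
        out[idx] = estimate_error(f[sl], m[sl])
    return out

rng = np.random.default_rng(1)
fixed = rng.random((8, 8, 8))
emap = error_map(fixed, fixed)  # identical volumes → zero error everywhere
print(float(emap.max()))  # → 0.0
```

In practice the network is applied convolutionally rather than in an explicit per-voxel loop, which is far faster.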

    The truth is hard to make: validation of medical image registration

    An unsolved problem in medical image analysis is validation of methods. In this paper we focus on image registration, and in particular on nonlinear image registration, which is one of the hardest analysis problems to validate. The paper covers currently used methods of validation, comparative challenges and public datasets, as well as some of our own work in this area.
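One of the most widely used validation measures in this line of work is target registration error (TRE) on manually annotated landmark pairs. A minimal sketch, assuming landmark coordinates in millimeters and a placeholder transform:

```python
# Minimal TRE sketch (assumption: the transforms below are toy placeholders
# for an actual registration result).
import numpy as np

def target_registration_error(fixed_lm, moving_lm, transform):
    """Mean Euclidean distance between transformed moving landmarks and
    their corresponding fixed landmarks."""
    mapped = np.apply_along_axis(transform, 1, moving_lm)
    return float(np.mean(np.linalg.norm(mapped - fixed_lm, axis=1)))

fixed = np.array([[10.0, 20.0, 30.0], [12.0, 22.0, 28.0]])
moving = fixed + np.array([1.0, 0.0, 0.0])       # shifted by 1 mm in x
identity = lambda p: p
shift = lambda p: p - np.array([1.0, 0.0, 0.0])  # a "perfect" registration
print(target_registration_error(fixed, moving, identity))  # → 1.0
print(target_registration_error(fixed, moving, shift))     # → 0.0
```

TRE only samples the deformation at the landmark locations, which is one reason validating nonlinear registration over the full image domain remains hard.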

    Adversarial training and dilated convolutions for brain MRI segmentation

    Convolutional neural networks (CNNs) have been applied to various automatic image segmentation tasks in medical image analysis, including brain MRI segmentation. Generative adversarial networks have recently gained popularity because of their power in generating images that are difficult to distinguish from real images. In this study we use an adversarial training approach to improve CNN-based brain MRI segmentation. To this end, we include an additional loss function that motivates the network to generate segmentations that are difficult to distinguish from manual segmentations. During training, this loss function is optimised together with the conventional average per-voxel cross entropy loss. The results show improved segmentation performance using this adversarial training procedure for segmentation of two different sets of images and using two different network architectures, both visually and in terms of Dice coefficients.
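The combined objective can be sketched as the conventional per-voxel cross entropy plus a weighted adversarial term that rewards segmentations the discriminator scores as "manual". The discriminator and the weight `lam` below are placeholders; real training alternates updates of both networks in a deep-learning framework, which is omitted here.

```python
# Hedged sketch (assumption: `discriminator` and `lam` are stand-ins; no
# actual network training is performed).
import numpy as np

def cross_entropy(pred, target, eps=1e-7):
    """Average per-voxel cross entropy for a binary segmentation."""
    p = np.clip(pred, eps, 1 - eps)
    return float(np.mean(-(target * np.log(p) + (1 - target) * np.log(1 - p))))

def segmentation_loss(pred, target, discriminator, lam=0.1):
    """Voxel-wise loss plus an adversarial term that is small when the
    discriminator believes the segmentation is manual (score close to 1)."""
    adv = -np.log(np.clip(discriminator(pred), 1e-7, 1.0))
    return cross_entropy(pred, target) + lam * float(adv)

target = np.array([[0.0, 1.0], [1.0, 0.0]])
pred = np.array([[0.1, 0.9], [0.8, 0.2]])
fooled = lambda s: 0.9  # discriminator believes the prediction is manual
caught = lambda s: 0.1  # discriminator spots the automatic segmentation
print(segmentation_loss(pred, target, fooled) < segmentation_loss(pred, target, caught))  # → True
```

The segmentation network thus gets a lower loss when its output fools the discriminator, pushing it toward more realistic-looking segmentations.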

    Domain-adversarial neural networks to address the appearance variability of histopathology images

    Preparing and scanning histopathology slides consists of several steps, each with a multitude of parameters. The parameters can vary between pathology labs and within the same lab over time, resulting in significant variability of the tissue appearance that hampers the generalization of automatic image analysis methods. Typically, this is addressed with ad hoc approaches such as staining normalization that aim to reduce the appearance variability. In this paper, we propose a systematic solution based on domain-adversarial neural networks. We hypothesize that removing the domain information from the model representation leads to better generalization. We tested our hypothesis for the problem of mitosis detection in breast cancer histopathology images and made a comparative analysis with two other approaches. We show that combining color augmentation with domain-adversarial training is a better alternative than standard approaches to improve the generalization of deep learning methods.
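The color-augmentation half of the combination can be sketched as random per-channel scaling and shifting of RGB patches to mimic stain and scanner variability between labs. The jitter ranges below are assumptions for illustration; the paper's exact augmentation and the domain-adversarial network itself are not reproduced here.

```python
# Sketch (assumption: simple per-channel scale/shift jitter; not the paper's
# exact augmentation parameters).
import numpy as np

def color_augment(image, rng, scale=0.1, shift=0.1):
    """Randomly scale and shift each RGB channel to mimic stain/scanner
    variability between pathology labs."""
    s = rng.uniform(1 - scale, 1 + scale, size=(1, 1, 3))
    b = rng.uniform(-shift, shift, size=(1, 1, 3))
    return np.clip(image * s + b, 0.0, 1.0)

rng = np.random.default_rng(2)
patch = rng.random((32, 32, 3))  # a histopathology patch in [0, 1]
augmented = color_augment(patch, rng)
print(augmented.shape)  # → (32, 32, 3)
```

Training on such perturbed patches encourages the model to ignore lab-specific color characteristics, complementing the domain-adversarial objective.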